
    Solving Satisfiability Problems with Genetic Algorithms

    We show how to solve hard 3-SAT problems using genetic algorithms. Furthermore, we explore other genetic operators that may be useful for tackling 3-SAT problems, and discuss their pros and cons.
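
    The abstract does not spell out the encoding or the operators, so the following is only a minimal sketch of one standard setup: an assignment is a bit string, fitness is the number of satisfied clauses, and new candidates are produced by uniform crossover and bit-flip mutation. All function names and parameter values are illustrative, not taken from the paper.

        import random

        def satisfied_clauses(assignment, clauses):
            # Each clause is a list of non-zero ints (DIMACS-style):
            # +i means variable i must be True, -i means it must be False.
            return sum(
                any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                for clause in clauses
            )

        def ga_3sat(clauses, n_vars, pop_size=100, generations=1000, p_mut=0.02):
            pop = [[random.random() < 0.5 for _ in range(n_vars)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda a: satisfied_clauses(a, clauses), reverse=True)
                if satisfied_clauses(pop[0], clauses) == len(clauses):
                    return pop[0]                      # satisfying assignment found
                parents = pop[: pop_size // 2]         # truncation selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]   # uniform crossover
                    child = [not v if random.random() < p_mut else v for v in child]    # bit-flip mutation
                    children.append(child)
                pop = parents + children
            return None                                # no solution found within the budget

        # tiny example: (x1 or x2 or not x3) and (not x1 or x3 or x2)
        print(ga_3sat([[1, 2, -3], [-1, 3, 2]], n_vars=3))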

    Computing Functions of Random Variables via Reproducing Kernel Hilbert Space Representations

    We describe a method to perform functional operations on probability distributions of random variables. The method uses reproducing kernel Hilbert space representations of probability distributions, and it is applicable to all operations which can be applied to points drawn from the respective distributions. We refer to our approach as "kernel probabilistic programming". We illustrate it on synthetic data, and show how it can be used for nonparametric structural equation models, with an application to causal inference.
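
    A minimal sketch of the underlying idea, under the assumption that distributions are represented by samples and their kernel mean embeddings: applying a function f to a random variable amounts to pushing the sample points through f, and the resulting embedding can be compared to one built from independent draws of f(X) via the maximum mean discrepancy. This is only an illustration of the representation, not the paper's full weighted-expansion machinery; all names and values are made up.

        import numpy as np

        def rbf_kernel(A, B, gamma=1.0):
            # Gaussian RBF kernel matrix between the rows of A and B
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-gamma * d2)

        def mmd2(X, Y, gamma=1.0):
            # squared maximum mean discrepancy between the empirical embeddings of X and Y
            return (rbf_kernel(X, X, gamma).mean()
                    + rbf_kernel(Y, Y, gamma).mean()
                    - 2 * rbf_kernel(X, Y, gamma).mean())

        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0, size=(500, 1))         # samples representing p(X)
        z = x ** 2                                      # push the samples through f(x) = x^2
        y = rng.normal(0.0, 1.0, size=(500, 1)) ** 2    # fresh samples drawn directly from f(X)
        print(mmd2(z, y))   # small: both sets represent the distribution of f(X)
        print(mmd2(x, y))   # larger: X and f(X) are different distributions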

    Technical report on implementation of linear methods and validation on acoustic sources


    Technical report on Separation methods for nonlinear mixtures


    Validation of nonlinear PCA

    Linear principal component analysis (PCA) can be extended to a nonlinear PCA by using artificial neural networks. But the benefit of curved components requires a careful control of the model complexity. Moreover, standard techniques for model selection, including cross-validation and more generally the use of an independent test set, fail when applied to nonlinear PCA because of its inherently unsupervised characteristics. This paper presents a new approach for validating the complexity of nonlinear PCA models by using the error in missing data estimation as a criterion for model selection. It is motivated by the idea that only the model of optimal complexity is able to predict missing values with the highest accuracy. While standard test set validation usually favours over-fitted nonlinear PCA models, the proposed model validation approach correctly selects the optimal model complexity.
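
    The paper's models are neural-network nonlinear PCA; for a short, self-contained illustration the sketch below applies the same selection criterion to plain linear PCA, where the complexity parameter is the number of components. Entries are removed artificially, each candidate complexity reconstructs them, and the complexity with the lowest error on the removed entries is selected. All data and parameter values are synthetic and illustrative.

        import numpy as np

        def pca_impute(X, n_components, n_iter=50):
            # fill missing entries (NaN) by iterating: mean fill -> rank-k PCA reconstruction
            mask = np.isnan(X)
            Xf = np.where(mask, np.nanmean(X, axis=0), X)
            for _ in range(n_iter):
                mu = Xf.mean(axis=0)
                U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
                recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
                Xf[mask] = recon[mask]          # only the missing entries are overwritten
            return Xf

        rng = np.random.default_rng(1)
        latent = rng.normal(size=(300, 2))                                  # 2 true components
        X = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(300, 10))

        holdout = rng.random(X.shape) < 0.10        # artificially remove 10% of the entries
        X_missing = np.where(holdout, np.nan, X)
        for k in range(1, 6):
            X_hat = pca_impute(X_missing, k)
            rmse = np.sqrt(np.mean((X_hat[holdout] - X[holdout]) ** 2))
            print(f"components={k}  missing-data RMSE={rmse:.4f}")
        # the candidate complexity with the lowest missing-data error is selected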

    Comparison of cardiac volumetry using real-time MRI during free-breathing with standard cine MRI during breath-hold in children

    Background: Cardiac real-time magnetic resonance imaging (RT-MRI) provides high-quality images even during free breathing. Difficulties in post-processing impede its use in clinical routine.
    Objective: To demonstrate the feasibility of quantitative analysis of cardiac free-breathing RT-MRI and to compare image quality and volumetry during free-breathing RT-MRI in pediatric patients to standard breath-hold cine MRI.
    Materials and methods: Pediatric patients (n = 22) received cardiac RT-MRI volumetry during free breathing (1.5 T; short axis; 30 frames per second) in addition to standard breath-hold cine imaging in end-expiration. Real-time images were binned retrospectively based on electrocardiography and respiratory bellows. Image quality and volumetry were compared using the European Cardiovascular Magnetic Resonance registry score, structure visibility rating, linear regression and Bland-Altman analyses.
    Results: Additional time for binning of real-time images was 2 min. For both techniques, image quality was rated good to excellent. RT-MRI was significantly more robust against artifacts (P < 0.01). Linear regression revealed good correlations for the ventricular volumes. Bland-Altman plots showed good limits of agreement (LoA) for end-diastolic volume (left ventricle [LV]: LoA -0.1 ± 2.7 ml/m2, right ventricle [RV]: LoA -1.9 ± 3.4 ml/m2), end-systolic volume (LV: LoA 0.4 ± 1.9 ml/m2, RV: LoA 0.6 ± 2.0 ml/m2), stroke volume (LV: LoA -0.5 ± 2.3 ml/m2, RV: LoA -2.6 ± 3.3 ml/m2) and ejection fraction (LV: LoA -0.5 ± 1.6%, RV: LoA -2.1 ± 2.8%).
    Conclusion: Compared to standard cine MRI with breath-hold, RT-MRI during free breathing with retrospective respiratory binning offers good image quality and reduced image artifacts, enabling fast quantitative evaluation of ventricular volumes in clinical practice under physiological conditions.
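
    The limits of agreement quoted above come from Bland-Altman analysis; conventionally they are the mean difference between the paired measurements plus or minus 1.96 standard deviations of those differences (the study may report them in a slightly different form). A minimal sketch of that computation, using hypothetical paired volumes rather than the study's data:

        import numpy as np

        def bland_altman(a, b):
            # bias (mean difference) and 95% limits of agreement between two methods
            diff = np.asarray(a) - np.asarray(b)
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        # hypothetical paired indexed end-diastolic volumes in ml/m2 (not the study's data)
        cine = np.array([78.1, 65.4, 90.2, 72.8, 84.0])
        rt   = np.array([77.9, 65.0, 90.5, 72.1, 83.6])
        bias, lower, upper = bland_altman(rt, cine)
        print(f"bias = {bias:.2f} ml/m2, 95% LoA [{lower:.2f}, {upper:.2f}]")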

    Image analysis for cosmology: results from the GREAT10 Galaxy Challenge

    In this paper, we present results from the weak-lensing shape measurement GRavitational lEnsing Accuracy Testing 2010 (GREAT10) Galaxy Challenge. This marks an order of magnitude step change in the level of scrutiny employed in weak-lensing shape measurement analysis. We provide descriptions of each method tested and include 10 evaluation metrics over 24 simulation branches. GREAT10 was the first shape measurement challenge to include variable fields; both the shear field and the point spread function (PSF) vary across the images in a realistic manner. The variable fields enable a variety of metrics that are inaccessible to constant shear simulations, including a direct measure of the impact of shape measurement inaccuracies, and of PSF size and ellipticity, on the shear power spectrum. To assess the impact of shape measurement bias for cosmic shear, we present a general pseudo-Cℓ formalism that propagates spatially varying systematics in cosmic shear through to power spectrum estimates. We also show how one-point estimators of bias can be extracted from variable shear simulations. The GREAT10 Galaxy Challenge received 95 submissions and saw a factor of 3 improvement in the accuracy achieved by shape measurement methods. The best methods achieve sub-per cent average biases. We find a strong dependence of accuracy on signal-to-noise ratio, and indications of a weak dependence on galaxy type and size. Some requirements for the most ambitious cosmic shear experiments are met above a signal-to-noise ratio of 20. These results carry the caveat that the simulated PSF was a ground-based PSF. Our results are a snapshot of the accuracy of current shape measurement methods and a benchmark against which improvement can be measured. This provides a foundation for a better understanding of the strengths and limitations of shape measurement methods.
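
    One-point shear measurement biases of the kind mentioned above are conventionally summarised by a linear model g_obs = (1 + m) g_true + c, where m is the multiplicative and c the additive bias. The sketch below fits that model to toy data; it does not reproduce the paper's pseudo-Cℓ formalism, and all numbers are invented for illustration.

        import numpy as np

        def shear_bias(g_true, g_obs):
            # fit g_obs ~ (1 + m) * g_true + c and return (m, c)
            slope, intercept = np.polyfit(g_true, g_obs, 1)
            return slope - 1.0, intercept

        rng = np.random.default_rng(2)
        g_true = rng.uniform(-0.05, 0.05, size=2000)                    # input shears of the simulated fields
        g_obs = 1.004 * g_true + 3e-4 + rng.normal(0.0, 0.002, 2000)    # toy "measured" shears
        m, c = shear_bias(g_true, g_obs)
        print(f"m = {m:.4f}, c = {c:.5f}")    # recovers roughly m = 0.004, c = 0.0003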

    Towards Zero Training for Brain-Computer Interfacing

    Electroencephalogram (EEG) signals are highly subject-specific and vary considerably even between recording sessions of the same user within the same experimental paradigm. This challenges the stable operation of Brain-Computer Interface (BCI) systems. The classical approach is to train users by neurofeedback to produce fixed stereotypical patterns of brain activity. In the machine learning approach, a widely adopted method for dealing with these variations is to record a so-called calibration measurement at the beginning of each session in order to optimize spatial filters and classifiers specifically for each subject and each day. This adaptation of the system to the individual brain signature of each user removes the need for extensive user training. In this paper we suggest a new method that overcomes the requirement of these time-consuming calibration recordings for long-term BCI users. The method takes advantage of knowledge collected in previous sessions: by a novel technique, prototypical spatial filters are determined which have better generalization properties than single-session filters. In particular, they can be used in follow-up sessions without the need to recalibrate the system. In this way the calibration periods can be dramatically shortened or even completely omitted for these ‘experienced’ BCI users. The feasibility of our novel approach is demonstrated with a series of online BCI experiments. Although these were performed without any calibration measurement at all, no loss of classification performance was observed.
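
    The abstract does not describe how the prototypical filters are constructed, so the sketch below shows only a plain variant of the session-transfer idea: common spatial pattern (CSP) filters computed from class covariance matrices averaged over earlier sessions, which can then be reused in a new session without recalibration. All shapes, data and parameter values are illustrative.

        import numpy as np
        from scipy.linalg import eigh

        def class_covariances(epochs, labels):
            # average trace-normalised spatial covariance per class
            # epochs: (n_trials, n_channels, n_samples), labels: (n_trials,) with values 0/1
            covs = {0: [], 1: []}
            for x, y in zip(epochs, labels):
                c = x @ x.T
                covs[int(y)].append(c / np.trace(c))
            return np.mean(covs[0], axis=0), np.mean(covs[1], axis=0)

        def csp_filters(c0, c1, n_filters=6):
            # common spatial patterns: generalized eigenvectors of (c0, c0 + c1),
            # keeping the filters with the most extreme eigenvalues
            vals, vecs = eigh(c0, c0 + c1)
            order = np.argsort(vals)
            pick = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
            return vecs[:, pick].T                     # (n_filters, n_channels)

        # "prototypical" filters from several earlier sessions: average the class
        # covariances over sessions before extracting the CSP filters (toy random data)
        rng = np.random.default_rng(3)
        sessions = [(rng.normal(size=(40, 16, 200)), rng.integers(0, 2, 40)) for _ in range(3)]
        c0 = np.mean([class_covariances(e, l)[0] for e, l in sessions], axis=0)
        c1 = np.mean([class_covariances(e, l)[1] for e, l in sessions], axis=0)
        W = csp_filters(c0, c1)
        print(W.shape)   # (6, 16): spatial filters that can be reused in a new session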

    The Political Economy of Climate Resilient Development Planning in Bangladesh

    Following three major disasters in 2007, Bangladesh intensified its efforts to tackle climate change through the development of the Bangladesh Climate Change Strategy and Action Plan (BCCSAP). The process of plan formulation led to debates nationally and internationally regarding the financing and integration of climate change into development planning. Using a political economy lens, this article illustrates how major national initiatives around international problems must be understood in terms of the interplay of actors, their ideas and power relations. The article argues that: (i) power relations among actors significantly influenced the selection of ideas and implementation activities; (ii) donor concerns around aid effectiveness and the consequent creation of parallel mechanisms of planning and implementation may run counter to both the mainstreaming process and the alignment of assistance with country priorities and systems; and (iii) climate change planning processes must be opened up to include actors from across sectors, population groups and geographical areas.